
Clickup integrations #1

Closed
wants to merge 205 commits into from

Conversation

@oiadebayo oiadebayo (Owner) commented Aug 21, 2024

Description

What -
Why -
How -

Type of change

Please leave one option from the following and delete the rest:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • New Integration (non-breaking change which adds a new integration)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Non-breaking change (fix of existing functionality that will not change current behavior)
  • Documentation (added/updated documentation)

Screenshots

Include screenshots from your environment showing how the resources of the integration will look.

API Documentation

Provide links to the API documentation used for this integration.

oiadebayo and others added 30 commits June 30, 2024 13:53
- Added support for both folderless and folder-based projects
- Added a workspace ID custom property to each project
Fixed import statement for custom properties
Fixed import statement for custom properties
Added team_id to issues in folders
- Fixed getting folders in space
- Fixed username not being a string
Fixed data mapping problems
Fixed data mapping
Fixed assignees issues
Fixing date mapping
Made username string
Fixed unterminated if
Correcting data mapping
Formatted and fixed webhook to listen for list instead of folders
- Removed unnecessary default
- Replaced username with email
- Corrected the tracking of Team ID and cached the responses that are needed repeatedly in an event.
Fixed folder export error
portmachineuser and others added 29 commits September 22, 2024 16:46
# Description

What - Added logs

Why - Better UX

How - 

## Type of change

Please leave one option from the following and delete the rest:

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] New Integration (non-breaking change which adds a new integration)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] Non-breaking change (fix of existing functionality that will not
change current behavior)
- [ ] Documentation (added/updated documentation)

<h4> All tests should be run against the port production environment (using a testing org). </h4>

### Core testing checklist

- [ ] Integration able to create all default resources from scratch
- [ ] Resync finishes successfully
- [ ] Resync able to create entities
- [ ] Resync able to update entities
- [ ] Resync able to detect and delete entities
- [ ] Scheduled resync able to abort existing resync and start a new one
- [ ] Tested with at least 2 integrations from scratch
- [ ] Tested with Kafka and Polling event listeners
- [ ] Tested deletion of entities that don't pass the selector


### Integration testing checklist

- [ ] Integration able to create all default resources from scratch
- [ ] Resync able to create entities
- [ ] Resync able to update entities
- [ ] Resync able to detect and delete entities
- [ ] Resync finishes successfully
- [ ] If new resource kind is added or updated in the integration, add
example raw data, mapping and expected result to the `examples` folder
in the integration directory.
- [ ] If resource kind is updated, run the integration with the example
data and check if the expected result is achieved
- [ ] If new resource kind is added or updated, validate that
live-events for that resource are working as expected
- [ ] Docs PR link [here](#)

### Preflight checklist

- [ ] Handled rate limiting
- [ ] Handled pagination
- [ ] Implemented the code in async
- [ ] Support Multi account

## Screenshots

Include screenshots from your environment showing how the resources of
the integration will look.

## API Documentation

Provide links to the API documentation used for this integration.
# Description

**What**  
- Improved the mechanism for parallel fetching of AWS account resources.
- Fixed `ExpiredTokenException` by replacing the event-based caching
system with a time-dependent caching mechanism. The new approach ensures
that the role is reassumed and session credentials are refreshed when
80% of the session duration has been used.

**Why**  
- The previous event-based caching system led to the
`ExpiredTokenException`, causing session credentials to expire
unexpectedly.
- Implementing a time-dependent caching mechanism ensures that session
credentials are refreshed proactively, preventing disruptions.

**How**  
- Replaced the resync-dependent caching system with a time-based cache
that monitors the session expiry.
- Added logic to reassume the role and refresh credentials once 80% of the session duration has passed, improving session reliability (see the sketch below).
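
A minimal sketch of the time-based refresh described above, assuming `aioboto3` for the STS call; the class name, constant, and session name are illustrative, not the integration's actual implementation:

```python
import time
from typing import Any

import aioboto3  # assumed client library; the integration may use a different one

# Refresh once this fraction of the session lifetime has elapsed (80%).
REFRESH_THRESHOLD = 0.8


class TimedCredentialCache:
    """Illustrative time-based cache: reassumes the role and refreshes
    session credentials before they can expire."""

    def __init__(self, role_arn: str, session_duration: int = 3600) -> None:
        self._role_arn = role_arn
        self._session_duration = session_duration
        self._credentials: dict[str, Any] | None = None
        self._assumed_at = 0.0

    def _is_stale(self) -> bool:
        # Refresh proactively instead of waiting for an ExpiredTokenException.
        elapsed = time.monotonic() - self._assumed_at
        return (
            self._credentials is None
            or elapsed >= self._session_duration * REFRESH_THRESHOLD
        )

    async def get_credentials(self) -> dict[str, Any]:
        if self._is_stale():
            session = aioboto3.Session()
            async with session.client("sts") as sts:
                response = await sts.assume_role(
                    RoleArn=self._role_arn,
                    RoleSessionName="ocean-resync",  # illustrative session name
                    DurationSeconds=self._session_duration,
                )
            self._credentials = response["Credentials"]
            self._assumed_at = time.monotonic()
        return self._credentials
```

Keyed per account and role, a cache like this makes at most one assume-role call per refresh window instead of one per resync event.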

## Type of change

Please leave one option from the following and delete the rest:

- [x] Bug fix (non-breaking change which fixes an issue)

<h4> All tests should be run against the port production environment (using a testing org). </h4>

### Core testing checklist

- [ ] Integration able to create all default resources from scratch
- [ ] Resync finishes successfully
- [ ] Resync able to create entities
- [ ] Resync able to update entities
- [ ] Resync able to detect and delete entities
- [ ] Scheduled resync able to abort existing resync and start a new one
- [ ] Tested with at least 2 integrations from scratch
- [ ] Tested with Kafka and Polling event listeners
- [ ] Tested deletion of entities that don't pass the selector


### Integration testing checklist

- [ ] Integration able to create all default resources from scratch
- [ ] Resync able to create entities
- [ ] Resync able to update entities
- [ ] Resync able to detect and delete entities
- [ ] Resync finishes successfully
- [ ] If new resource kind is added or updated in the integration, add
example raw data, mapping and expected result to the `examples` folder
in the integration directory.
- [ ] If resource kind is updated, run the integration with the example
data and check if the expected result is achieved
- [ ] If new resource kind is added or updated, validate that
live-events for that resource are working as expected
- [ ] Docs PR link [here](#)

### Preflight checklist

- [ ] Handled rate limiting
- [ ] Handled pagination
- [ ] Implemented the code in async
- [ ] Support Multi account

## Screenshots

Include screenshots from your environment showing how the resources of
the integration will look.

## API Documentation

Provide links to the API documentation used for this integration.
…labs#1061)

# Description

What - The GitLab folder kind had a bug where it got stuck in an infinite
loop, and the same data was returned for every page index. Upon
investigation, it was discovered that the pagination parameters,
specifically [keyset
pagination](https://docs.gitlab.com/ee/api/rest/index.html#supported-resources),
were behind this error, since the docs do not provide options for
controlling keyset pagination on the repository tree endpoint.

Why - 

How - This was resolved by using standard offset pagination, where we
pass the page index and page size to the repository tree API, as sketched below.
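
For illustration, a minimal sketch of offset pagination against the repository tree endpoint, assuming `httpx`; the base URL, token handling, and page size are placeholders rather than the integration's actual code:

```python
import httpx

PAGE_SIZE = 100  # placeholder page size


async def iter_repository_tree(
    client: httpx.AsyncClient, gitlab_url: str, project_id: int, token: str
):
    """Walk /projects/:id/repository/tree with standard offset pagination."""
    page = 1
    while True:
        response = await client.get(
            f"{gitlab_url}/api/v4/projects/{project_id}/repository/tree",
            params={"page": page, "per_page": PAGE_SIZE, "recursive": True},
            headers={"PRIVATE-TOKEN": token},
        )
        response.raise_for_status()
        batch = response.json()
        if not batch:  # an empty page means the whole tree has been walked
            break
        yield batch
        page += 1
```

Because the page index advances explicitly and the loop exits on an empty page, the same data can no longer be returned for every page index.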

## Type of change

Please leave one option from the following and delete the rest:

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] New Integration (non-breaking change which adds a new integration)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] Non-breaking change (fix of existing functionality that will not
change current behavior)
- [ ] Documentation (added/updated documentation)

<h4> All tests should be run against the port production environment (using a testing org). </h4>

### Core testing checklist

- [ ] Integration able to create all default resources from scratch
- [ ] Resync finishes successfully
- [ ] Resync able to create entities
- [ ] Resync able to update entities
- [ ] Resync able to detect and delete entities
- [ ] Scheduled resync able to abort existing resync and start a new one
- [ ] Tested with at least 2 integrations from scratch
- [ ] Tested with Kafka and Polling event listeners
- [ ] Tested deletion of entities that don't pass the selector


### Integration testing checklist

- [ ] Integration able to create all default resources from scratch
- [ ] Resync able to create entities
- [ ] Resync able to update entities
- [ ] Resync able to detect and delete entities
- [ ] Resync finishes successfully
- [ ] If new resource kind is added or updated in the integration, add
example raw data, mapping and expected result to the `examples` folder
in the integration directory.
- [ ] If resource kind is updated, run the integration with the example
data and check if the expected result is achieved
- [ ] If new resource kind is added or updated, validate that
live-events for that resource are working as expected
- [ ] Docs PR link [here](#)

### Preflight checklist

- [ ] Handled rate limiting
- [ ] Handled pagination
- [ ] Implemented the code in async
- [ ] Support Multi account

## Screenshots

<img width="1136" alt="Screenshot 2024-10-02 at 4 52 11 PM"
src="https://github.com/user-attachments/assets/f19a2f9f-8d12-4289-865b-8fdd39a1fcee">
<img width="1136" alt="Screenshot 2024-10-02 at 4 52 05 PM"
src="https://github.com/user-attachments/assets/696bb1bf-41ca-474a-beb9-744df2dd6b4d">

## API Documentation

Provide links to the API documentation used for this integration.
# Description

What - Added a util `semaphore_async_iterator` to enable seamless
control over concurrent executions per kind.

Why - Simplifies the process of implementing a concurrency limit when
streaming async tasks. Works well with the existing
`stream_async_iterators_task` util for concurrency control.

How - Utilized the asyncio Semaphore context manager; works with both
bounded and unbounded semaphores (see the sketch below).
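
A sketch of how such a util can be implemented and used; the signature and names here are illustrative, and the framework's actual `semaphore_async_iterator` may differ:

```python
import asyncio
from typing import Any, AsyncIterator, Callable


async def semaphore_async_iterator(
    semaphore: asyncio.Semaphore,
    function: Callable[[], AsyncIterator[Any]],
) -> AsyncIterator[Any]:
    # Hold one semaphore slot for the lifetime of the wrapped iterator,
    # capping how many iterators are consumed concurrently.
    async with semaphore:
        async for item in function():
            yield item


# Usage sketch: cap concurrent per-kind fetches at 5.
async def main() -> None:
    semaphore = asyncio.BoundedSemaphore(5)

    async def fetch_kind(kind: str) -> AsyncIterator[str]:
        await asyncio.sleep(0.1)  # stand-in for a paginated API call
        yield f"{kind}-result"

    wrapped = [
        semaphore_async_iterator(semaphore, lambda k=kind: fetch_kind(k))
        for kind in ("task", "list", "folder")
    ]
    for iterator in wrapped:
        async for item in iterator:
            print(item)


asyncio.run(main())
```

In practice the wrapped iterators would be merged with the stream-merging util mentioned above so that they actually run concurrently up to the semaphore's limit.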

## Type of change

Please leave one option from the following and delete the rest:

- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] New Integration (non-breaking change which adds a new integration)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] Non-breaking change (fix of existing functionality that will not
change current behavior)
- [ ] Documentation (added/updated documentation)

<h4> All tests should be run against the port production environment (using a testing org). </h4>

### Core testing checklist

- [x] Integration able to create all default resources from scratch
- [x] Resync finishes successfully
- [x] Resync able to create entities
- [x] Resync able to update entities
- [x] Resync able to detect and delete entities
- [x] Scheduled resync able to abort existing resync and start a new one
- [x] Tested with at least 2 integrations from scratch
- [x] Tested with Kafka and Polling event listeners
- [x] Tested deletion of entities that don't pass the selector


### Integration testing checklist

- [ ] Integration able to create all default resources from scratch
- [ ] Resync able to create entities
- [ ] Resync able to update entities
- [ ] Resync able to detect and delete entities
- [ ] Resync finishes successfully
- [ ] If new resource kind is added or updated in the integration, add
example raw data, mapping and expected result to the `examples` folder
in the integration directory.
- [ ] If resource kind is updated, run the integration with the example
data and check if the expected result is achieved
- [ ] If new resource kind is added or updated, validate that
live-events for that resource are working as expected
- [ ] Docs PR link [here](#)

### Preflight checklist

- [ ] Handled rate limiting
- [ ] Handled pagination
- [ ] Implemented the code in async
- [ ] Support Multi account

## Screenshots

Include screenshots from your environment showing how the resources of
the integration will look.

## API Documentation

Provide links to the API documentation used for this integration.
# Description

- Added a new kind `stage`
- Added a new blueprint `jenkinsStage`

### Implementation
Utilized the Jenkins API provided by the
[pipeline-stage-view-plugin](https://github.com/jenkinsci/pipeline-stage-view-plugin/tree/master/rest-api#get-jobjob-namewfapiruns)
to retrieve pipeline stage information.

The API returns details such as stage IDs, names, statuses, start times,
durations, and links to each stage, as shown in the example JSON
response below:

```json
{
    "_links": {
        "self": {
            "href": "/job/Phalbert/job/airframe-react/job/master/29/wfapi/describe"
        }
    },
    "id": "29",
    "name": "port-labs#29",
    "status": "FAILED",
    "startTimeMillis": 1717069181870,
    "endTimeMillis": 1717070384780,
    "durationMillis": 1202910,
    "queueDurationMillis": 5,
    "pauseDurationMillis": 0,
    "stages": [
        {
            "_links": {
                "self": {
                    "href": "/job/Phalbert/job/airframe-react/job/master/29/execution/node/6/wfapi/describe"
                }
            },
            "id": "6",
            "name": "Declarative: Checkout SCM",
            "execNode": "",
            "status": "SUCCESS",
            "startTimeMillis": 1717070383791,
            "durationMillis": 892,
            "pauseDurationMillis": 0
        },
        {
            "_links": {
                "self": {
                    "href": "/job/Phalbert/job/airframe-react/job/master/29/execution/node/17/wfapi/describe"
                }
            },
            "id": "17",
            "name": "Declarative: Post Actions",
            "execNode": "",
            "status": "SUCCESS",
            "startTimeMillis": 1717070384739,
            "durationMillis": 24,
            "pauseDurationMillis": 0
        }
    ]
}
```
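
For illustration, a minimal sketch of querying the plugin's `wfapi/runs` endpoint for a job, assuming `httpx`; the function name, job path, and credential handling are placeholders, not the integration's actual code:

```python
import httpx


async def fetch_run_stages(
    client: httpx.AsyncClient, jenkins_url: str, job_path: str, user: str, token: str
) -> list[dict]:
    """Fetch stage details for every run of a job via the pipeline-stage-view REST API."""
    response = await client.get(
        f"{jenkins_url}/{job_path}/wfapi/runs",
        auth=(user, token),
    )
    response.raise_for_status()
    runs = response.json()
    # Each run carries a "stages" array like the example above; flatten them
    # so every stage can be mapped to a `jenkinsStage` entity.
    return [stage for run in runs for stage in run.get("stages", [])]
```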

Additional Context: For more details and ongoing discussion, please
refer to the linked Slack thread: [Jira Task
Discussion](https://getport.slack.com/archives/C0799SR843F/p1723749916041039).

## Type of change

Please leave one option from the following and delete the rest:

- [x] New feature (non-breaking change which adds functionality)

<h4> All tests should be run against the port production environment (using a testing org). </h4>

### Core testing checklist

- [ ] Integration able to create all default resources from scratch
- [ ] Resync finishes successfully
- [ ] Resync able to create entities
- [ ] Resync able to update entities
- [ ] Resync able to detect and delete entities
- [ ] Scheduled resync able to abort existing resync and start a new one
- [ ] Tested with at least 2 integrations from scratch
- [ ] Tested with Kafka and Polling event listeners


### Integration testing checklist

- [x] Integration able to create all default resources from scratch
- [x] Resync able to create entities
- [x] Resync able to update entities
- [x] Resync able to detect and delete entities
- [x] Resync finishes successfully
- [x] If new resource kind is added or updated in the integration, add
example raw data, mapping and expected result to the `examples` folder
in the integration directory.
- [x] If resource kind is updated, run the integration with the example
data and check if the expected result is achieved
- [x] If new resource kind is added or updated, validate that
live-events for that resource are working as expected
- [x] Docs PR link
[here](port-labs/port-docs#1613)

### Preflight checklist

- [x] Handled rate limiting
- [x] Handled pagination
- [x] Implemented the code in async
- [ ] Support Multi account

## Screenshots

Include screenshots from your environment showing how the resources of
the integration will look.

## API Documentation

Provide links to the API documentation used for this integration.

---------

Co-authored-by: PagesCoffy <[email protected]>
Co-authored-by: omby8888 <[email protected]>
# Description

What - Fix miscalculated changelog version step 0.12.1 -> ~~0.12.12~~ 0.12.2

Why - 

How -

## Type of change

Please leave one option from the following and delete the rest:

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] New Integration (non-breaking change which adds a new integration)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] Non-breaking change (fix of existing functionality that will not
change current behavior)
- [ ] Documentation (added/updated documentation)

<h4> All tests should be run against the port production environment (using a testing org). </h4>

### Core testing checklist

- [ ] Integration able to create all default resources from scratch
- [ ] Resync finishes successfully
- [ ] Resync able to create entities
- [ ] Resync able to update entities
- [ ] Resync able to detect and delete entities
- [ ] Scheduled resync able to abort existing resync and start a new one
- [ ] Tested with at least 2 integrations from scratch
- [ ] Tested with Kafka and Polling event listeners
- [ ] Tested deletion of entities that don't pass the selector


### Integration testing checklist

- [ ] Integration able to create all default resources from scratch
- [ ] Resync able to create entities
- [ ] Resync able to update entities
- [ ] Resync able to detect and delete entities
- [ ] Resync finishes successfully
- [ ] If new resource kind is added or updated in the integration, add
example raw data, mapping and expected result to the `examples` folder
in the integration directory.
- [ ] If resource kind is updated, run the integration with the example
data and check if the expected result is achieved
- [ ] If new resource kind is added or updated, validate that
live-events for that resource are working as expected
- [ ] Docs PR link [here](#)

### Preflight checklist

- [ ] Handled rate limiting
- [ ] Handled pagination
- [ ] Implemented the code in async
- [ ] Support Multi account

## Screenshots

Include screenshots from your environment showing how the resources of
the integration will look.

## API Documentation

Provide links to the API documentation used for this integration.
@oiadebayo oiadebayo closed this Oct 10, 2024