- Update Rails to 6.1
- For Zeitwerk, Barbeque::SNSSubscription and Barbeque::SNSSubscriptionService are renamed to Barbeque::SnsSubscription and Barbeque::SnsSubscriptionService respectively
- Support Ruby 3.0
- Drop support for Ruby < 3.0
- Drop support for MySQL 5.7
- A new migration is added to fix collations properly in MySQL 8.0
- Pass `BARBEQUE_SENT_TIMESTAMP` variable to invoked jobs
  - The value is the epoch time in milliseconds when the message was sent to the queue. See also: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html
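As an illustration (the helper below is hypothetical, not part of Barbeque), an invoked job can read this variable to measure how long its message waited in the queue:

```ruby
# Hypothetical helper for an invoked job: compute queue latency from the
# BARBEQUE_SENT_TIMESTAMP environment variable (epoch time in milliseconds).
def queue_latency_ms(now: Time.now)
  sent_ms = Integer(ENV.fetch('BARBEQUE_SENT_TIMESTAMP'))
  (now.to_f * 1000).round - sent_ms
end
```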
- Use kaminari helper to generate links safely
- Delete the same message multiple times when DeleteMessage results in partial deletion of copies
- Accept retry configuration on create
- Wrap JSON message in a pre tag for large messages
- Make job names case-sensitive
- Do not count pending retried job executions when checking `maximum_concurrent_executions`
- Add "Notify failure event to Slack only if retry limit reached" option
- Change the default value of "Base delay" option from 0.3 seconds to 15 seconds
- Add server-side retry feature
- Stop deleting job executions when a job definition is deleted
  - Job executions tend to have a large number of records, so deleting them all is impractical.
- Return 503 in maintenance mode when a mysql2 error occurs
- Use `BARBEQUE_HOST` environment variable to generate the `html_url` field in API responses
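For example (a sketch; the https scheme and the helper name are assumptions based on the v2.0 URL layout), the field could be built roughly like:

```ruby
# Sketch of building an html_url field from the BARBEQUE_HOST environment
# variable. The scheme and helper name are illustrative assumptions.
def html_url(message_id)
  host = ENV.fetch('BARBEQUE_HOST')
  "https://#{host}/job_executions/#{message_id}"
end
```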
- Add selectable `message` field to `GET /v1/job_executions/:message_id` response
- Add `BARBEQUE_VERIFY_ENQUEUED_JOBS` flag to the API server, which enables the feature that verifies enqueued jobs by accessing MySQL
- Add `delay_seconds` parameter to support SQS's delay_seconds
  - This also supports ActiveJob's enqueue_at method.
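A sketch of the translation (not Barbeque's actual code): an enqueue_at timestamp becomes an SQS DelaySeconds value, which SQS caps at 900 seconds (15 minutes).

```ruby
# Hypothetical translation of an ActiveJob enqueue_at timestamp into SQS's
# DelaySeconds parameter, which SQS limits to the range 0..900.
SQS_MAX_DELAY_SECONDS = 900

def delay_seconds_for(enqueue_at, now: Time.now)
  (enqueue_at - now).ceil.clamp(0, SQS_MAX_DELAY_SECONDS)
end
```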
- Show all SNS topics in /sns_subscriptions/new
- Update Rails to 5.2
- Add index to barbeque_job_executions.created_at
  - Be careful when you have a large number of records in the barbeque_job_executions table.
- Limit concurrent executions per job queue
  - Previously, `maximum_concurrent_executions` was applied to all job executions regardless of job queue.
  - Now `maximum_concurrent_executions` is applied to each job queue.
- Poll job executions and job retries only of the specified queue
  - Previously, all execution pollers were polling all job execution/retry statuses.
  - Now execution pollers poll only the execution/retry statuses of their own job queue.
- Support Hako definitions written in Jsonnet
  - Jsonnet format is supported since Hako v2.0.0
- Rename `yaml_dir` to `definition_dir` in config/barbeque.yml
  - `yaml_dir` is still supported with warnings for now
- Build queue_url without database access when maintenance mode is enabled
  - See #58 for details
- Job execution URL was changed from `/job_executions/:id` to `/job_executions/:message_id`
  - Barbeque v1.0 links are redirected to v2.0 links
  - Job retry URL `/job_executions/:id/job_retries/:id` is also redirected to `/job_executions/:message_id/job_retries/:id`
- Do not create execution record when sqs:DeleteMessage returns error
- Update aws-sdk to v3
- Use modularized aws-sdk gems
- Filter job executions by status
- Show SQS metrics in job queue page
- Update plotly.js to v1.29.3
- Do not truncate hover labels in /monitors chart
- Fix Slack notification field in job definition form
- Extract S3 client for hako tasks
- Do not create job_execution record when S3 returns error
- Ignore S3 errors when starting an execution
- Set descriptive title element
- Add breadcrumbs to all pages
- Update Rails to 5.1
- Add message context to exception handler
  - Now the exception handler is able to track which message is being processed when an exception is raised
- Set status to running after creating related records
- Introduce Executor as a replacement for Runner
  - `runner` and `runner_options` are renamed to `executor` and `executor_options` respectively
  - Now `rake barbeque:worker` launches three types of processes
    - Runner: receives a message from the SQS queue, starts a job execution, and stores its identifier to the database
      - In Executor::Docker, the identifier is the container id
      - In Executor::Hako, the identifier is the ECS cluster and task ARN
    - ExecutionPoller: polls execution status and reflects it to the database
      - In Executor::Docker, uses the `docker inspect` command
      - In Executor::Hako, uses the S3 task notification JSON
    - RetryPoller: polls retry status and reflects it to the database
      - Same as ExecutionPoller
- Add `maximum_concurrent_executions` configuration to config/barbeque.yml
  - It controls the number of concurrent job executions
  - The limit is disabled by default
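A config/barbeque.yml fragment enabling the limit might look like this (the value is illustrative; other keys are omitted):

```yaml
# config/barbeque.yml (fragment; illustrative value)
maximum_concurrent_executions: 10
```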
- Drop support for legacy S3 log format
  - Run the migration script before upgrading to v1.0.0
- Add `sqs_receive_message_wait_time` configuration to config/barbeque.yml
  - This option controls the ReceiveMessageWaitTimeSeconds attribute of the SQS queue
  - The default value is changed from 20s to 10s
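For example, a config/barbeque.yml fragment setting the long-poll wait explicitly (illustrative value; other keys omitted):

```yaml
# config/barbeque.yml (fragment; illustrative value)
sqs_receive_message_wait_time: 10
```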
- Change S3 log format #29
  - The legacy format saves `{message: message.body.to_json, stdout: stdout, stderr: stderr}.to_json` to `#{app}/#{job}/#{message_id}`
  - The new format saves the message body to `#{app}/#{job}/#{message_id}/message.json`, stdout to `#{app}/#{job}/#{message_id}/stdout.txt`, and stderr to `#{app}/#{job}/#{message_id}/stderr.txt`
  - The legacy format is still supported in v0.7.0, but will be removed in v1.0.0
  - Migration script is available: tools/s3-log-migrator.rb
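The new key layout can be sketched as follows (the helper name is hypothetical; the keys mirror the paths listed above):

```ruby
# Hypothetical helper showing the new S3 key layout per job execution.
def s3_log_keys(app, job, message_id)
  prefix = "#{app}/#{job}/#{message_id}"
  {
    message: "#{prefix}/message.json",
    stdout:  "#{prefix}/stdout.txt",
    stderr:  "#{prefix}/stderr.txt",
  }
end
```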
- Add "running" status #28
- Kill N+1 query #27
- Show application names for each job definition in SNS subscriptions #26
- Support JSON-formatted string as Notification messages #25
- Destroy SNS subscriptions before destroying job definition #24
- Log message body in error status for retry #23
- Add error status to job_execution #22
- Add error handling for AWS SNS API calls #21
- Support fan-out executions using AWS SNS notifications #20
- Fix job_retry order in job_execution page #16
- Fix Back path to each job definition page #17
- Fix "active" class in sidebar #18
- Add new page to show recently processed jobs #19
- Autolink URLs in job_retry outputs #15
- Make operation to deduplicate messages atomic #14
- Add execution id and html_url to status API response #13
- Fix bug in execution statistics #12
- Add Hako runner #11
- Handle S3 error on web console #10
- Reuse AWS credentials assumed from Role #9
- Move statistics button to upper right on job definition page
- Link app from job definitions index
- Autolink stdout and stderr #8
- Report exception raised in SQS message parser
- Allow logging worker exception by Raven #7
- Allow switching log output by `BARBEQUE_LOG_TO_STDOUT` #6
- Destroy job definitions after their app destruction #4