Add FIPS switch (#19179)
* Add FIPS workflow file

* Add Windows steps

* Experiment with download from S3 for Windows

* Revert to building openssl

* Switch Windows steps to download from S3

* Remove unnecessary steps

* Add FIPS_MODULE_VERSION for Linux

* Finish handling Python in setup

* Remove unnecessary steps

* Add md5 tests

* Make md5 tests pass

* Try separating FIPS and non-FIPS md5 tests

* Add e2e tests for TLS FIPS

* Make TLS E2E tests pass

* Switch from env vars to C bindings

* Revert to using env vars

* Add option for e2e env vars in workflow

* Remove unnecessary comments from start-server.sh

* Rework enable_fips for user env var overwrite

* Disable FIPS tests by default in master

* Add changelogs

* Fix license headers

* Remove unfinished tests

* Remove openssl.cnf workaround

* Remove unused compose file

* Fix license headers

* Bring back integration tests

* Experiment with integration tests

* Remove integration test files

* Restore pr.yml and test-target.yml

* Move FIPS workflow to test-fips.yml

* Fix pytest "not fips" args

* Update test-fips.yml

* Fix invalid workflow

* Modify JOB_NAME env var

* Re-introduce experimental integration tests

* Merge e2e tests and clean test-fips workflow

* Merge integration tests and use monkeypatch in setup fixture

* Attempt to fix experimental workflow

* Replace ddev with pytest in experimental workflow

* Revert "Replace ddev with pytest in experimental workflow"

This reverts commit fda181f.

* Remove experimental tests from PR

* Add unit tests for env var logic

* Switch to using marks to exclude fips from test-target

* Revert "Switch to using marks to exclude fips from test-target"

This reverts commit 3e3e51a.
dkirov-dd authored Dec 26, 2024
1 parent 642b2f9 commit c0d1c42
Showing 16 changed files with 463 additions and 15 deletions.
151 changes: 151 additions & 0 deletions .github/workflows/test-fips.yml
@@ -0,0 +1,151 @@
name: Test FIPS E2E

on:
  workflow_dispatch:
    inputs:
      agent-image:
        description: "Agent image to use"
        required: false
        type: string
      target:
        description: "Target to test"
        required: false
        type: string
  pull_request:
    paths:
      - datadog_checks_base/datadog_checks/**
  schedule:
    - cron: '0 0,8,16 * * *'

defaults:
  run:
    shell: bash

jobs:
  run:
    name: "Test FIPS"
    runs-on: ["ubuntu-22.04"]

    env:
      FORCE_COLOR: "1"
      PYTHON_VERSION: "3.12"
      DDEV_E2E_AGENT: "${{ inputs.agent-image || 'datadog/agent-dev:master-fips' }}"
      # Test results for later processing
      TEST_RESULTS_BASE_DIR: "test-results"
      # Tracing to monitor our test suite
      DD_ENV: "ci"
      DD_SERVICE: "ddev-integrations-core"
      DD_TAGS: "team:agent-integrations"
      DD_TRACE_ANALYTICS_ENABLED: "true"
      # Capture traces for a separate job to do the submission
      TRACE_CAPTURE_BASE_DIR: "trace-captures"
      TRACE_CAPTURE_LOG: "trace-captures/output.log"

    steps:
      - name: Set environment variables with sanitized paths
        run: |
          JOB_NAME="test-fips"
          echo "TEST_RESULTS_DIR=$TEST_RESULTS_BASE_DIR/$JOB_NAME" >> $GITHUB_ENV
          echo "TRACE_CAPTURE_FILE=$TRACE_CAPTURE_BASE_DIR/$JOB_NAME" >> $GITHUB_ENV

      - uses: actions/checkout@v4

      - name: Set up Python ${{ env.PYTHON_VERSION }}
        uses: actions/setup-python@v5
        with:
          python-version: "${{ env.PYTHON_VERSION }}"
          cache: 'pip'

      - name: Restore cache
        uses: actions/cache/restore@v4
        with:
          path: '~/.cache/pip'
          key: >-
            ${{ format(
              'v01-python-{0}-{1}-{2}-{3}',
              env.pythonLocation,
              hashFiles('datadog_checks_base/pyproject.toml'),
              hashFiles('datadog_checks_dev/pyproject.toml'),
              hashFiles('ddev/pyproject.toml')
            )}}
          restore-keys: |-
            v01-python-${{ env.pythonLocation }}

      - name: Install ddev from local folder
        run: |-
          pip install -e ./datadog_checks_dev[cli]
          pip install -e ./ddev

      - name: Configure ddev
        run: |-
          ddev config set repos.core .
          ddev config set repo core

      - name: Prepare for testing
        env:
          PYTHONUNBUFFERED: "1"
          DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
          DOCKER_ACCESS_TOKEN: ${{ secrets.DOCKER_ACCESS_TOKEN }}
          ORACLE_DOCKER_USERNAME: ${{ secrets.ORACLE_DOCKER_USERNAME }}
          ORACLE_DOCKER_PASSWORD: ${{ secrets.ORACLE_DOCKER_PASSWORD }}
          SINGLESTORE_LICENSE: ${{ secrets.SINGLESTORE_LICENSE }}
          DD_GITHUB_USER: ${{ github.actor }}
          DD_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: ddev ci setup ${{ inputs.target || 'tls' }}

      - name: Set up trace capturing
        env:
          PYTHONUNBUFFERED: "1"
        run: |-
          mkdir "${{ env.TRACE_CAPTURE_BASE_DIR }}"
          python .ddev/ci/scripts/traces.py capture --port "8126" --record-file "${{ env.TRACE_CAPTURE_FILE }}" > "${{ env.TRACE_CAPTURE_LOG }}" 2>&1 &

      - name: Run E2E tests with FIPS disabled
        env:
          DD_API_KEY: "${{ secrets.DD_API_KEY }}"
        run: |
          ddev env test -e GOFIPS=0 --new-env --junit ${{ inputs.target || 'tls' }} -- all -m "fips_off"

      - name: Run E2E tests with FIPS enabled
        env:
          DD_API_KEY: "${{ secrets.DD_API_KEY }}"
        run: |
          ddev env test -e GOFIPS=1 --new-env --junit ${{ inputs.target || 'tls' }} -- all -k "fips_on"

      - name: View trace log
        if: always()
        run: cat "${{ env.TRACE_CAPTURE_LOG }}"

      - name: Upload captured traces
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: "traces-${{ inputs.target || 'tls' }}"
          path: "${{ env.TRACE_CAPTURE_FILE }}"

      - name: Finalize test results
        if: always()
        run: |-
          mkdir -p "${{ env.TEST_RESULTS_DIR }}"
          if [[ -d ${{ inputs.target || 'tls' }}/.junit ]]; then
            mv ${{ inputs.target || 'tls' }}/.junit/*.xml "${{ env.TEST_RESULTS_DIR }}"
          fi

      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: "test-results-${{ inputs.target || 'tls' }}"
          path: "${{ env.TEST_RESULTS_BASE_DIR }}"

      - name: Upload coverage data
        if: >
          !github.event.repository.private &&
          always()
        uses: codecov/codecov-action@v4
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          files: "${{ inputs.target || 'tls' }}/coverage.xml"
          flags: "${{ inputs.target || 'tls' }}"
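For orientation, a hypothetical sketch of tests that the FIPS-disabled and FIPS-enabled steps above would select (`-m "fips_off"` picks up a marker, `-k "fips_on"` matches on the test name). The marker and keyword names come from the workflow; the test names, bodies, and the exact MD5 failure mode are illustrative assumptions, not part of this commit.

```python
import hashlib

import pytest


@pytest.mark.fips_off
def test_md5_available_when_fips_disabled():
    # Illustrative: with GOFIPS=0 the non-approved MD5 digest is expected to work.
    assert hashlib.md5(b"payload").hexdigest()


def test_fips_on_md5_is_blocked():
    # Name contains "fips_on", so `-k "fips_on"` selects it; in FIPS mode the
    # embedded OpenSSL is expected to reject non-approved digests such as MD5.
    with pytest.raises(ValueError):
        hashlib.md5(b"payload")
```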
24 changes: 12 additions & 12 deletions .github/workflows/test-target.yml
@@ -225,7 +225,7 @@ jobs:
       run: |
         if [ '${{ inputs.pytest-args }}' = '-m flaky' ]; then
           set +e # Disable immediate exit
-          ddev test --cov --junit ${{ inputs.target }} -- ${{ inputs.pytest-args }}
+          ddev test --cov --junit ${{ inputs.target }} -- ${{ inputs.pytest-args }} -- '-k "not fips"'
           exit_code=$?
           if [ $exit_code -eq 5 ]; then
             # Flaky test count can be zero, this is done to avoid pipeline failure
@@ -235,15 +235,15 @@
             exit $exit_code
           fi
         else
-          ddev test --cov --junit ${{ inputs.target }} ${{ inputs.pytest-args != '' && format('-- {0}', inputs.pytest-args) || '' }}
+          ddev test --cov --junit ${{ inputs.target }} ${{ inputs.pytest-args != '' && format('-- {0} -k "not fips"', inputs.pytest-args) || '-- -k "not fips"' }}
         fi
     - name: Run Unit & Integration tests with minimum version of base package
       if: inputs.standard && inputs.minimum-base-package
       run: |
         if [ '${{ inputs.pytest-args }}' = '-m flaky' ]; then
           set +e # Disable immediate exit
-          ddev test --compat --recreate --junit ${{ inputs.target }} -- ${{ inputs.pytest-args }}
+          ddev test --compat --recreate --junit ${{ inputs.target }} -- ${{ inputs.pytest-args }} -k "not fips"
           exit_code=$?
           if [ $exit_code -eq 5 ]; then
             # Flaky test count can be zero, this is done to avoid pipeline failure
@@ -253,7 +253,7 @@
             exit $exit_code
           fi
         else
-          ddev test --compat --recreate --junit ${{ inputs.target }} ${{ inputs.pytest-args != '' && format('-- {0}', inputs.pytest-args) || '' }}
+          ddev test --compat --recreate --junit ${{ inputs.target }} ${{ inputs.pytest-args != '' && format('-- {0} -k "not fips"', inputs.pytest-args) || '-- -k "not fips"' }}
         fi
     - name: Run E2E tests with latest base package
@@ -270,7 +270,7 @@
         # by default
         if [ '${{ inputs.pytest-args }}' = '-m flaky' ]; then
           set +e # Disable immediate exit
-          ddev env test --base --new-env --junit ${{ inputs.target }} -- all ${{ inputs.pytest-args }}
+          ddev env test --base --new-env --junit ${{ inputs.target }} -- all ${{ inputs.pytest-args }} -k "not fips"
           exit_code=$?
           if [ $exit_code -eq 5 ]; then
             # Flaky test count can be zero, this is done to avoid pipeline failure
@@ -281,7 +281,7 @@
           fi
         elif [ '${{ inputs.pytest-args }}' = '-m "not flaky"' ]; then
           set +e # Disable immediate exit
-          ddev env test --base --new-env --junit ${{ inputs.target }} -- all ${{ inputs.pytest-args }}
+          ddev env test --base --new-env --junit ${{ inputs.target }} -- all ${{ inputs.pytest-args }} -k "not fips"
           exit_code=$?
           if [ $exit_code -eq 5 ]; then
             # Flaky test count can be zero, this is done to avoid pipeline failure
@@ -291,7 +291,7 @@
             exit $exit_code
           fi
         else
-          ddev env test --base --new-env --junit ${{ inputs.target }} ${{ inputs.pytest-args != '' && format('-- all {0}', inputs.pytest-args) || '' }}
+          ddev env test --base --new-env --junit ${{ inputs.target }} ${{ inputs.pytest-args != '' && format('-- all {0} -k "not fips"', inputs.pytest-args) || '-- all -k "not fips"' }}
         fi
     - name: Run E2E tests
@@ -308,7 +308,7 @@
         # by default
         if [ '${{ inputs.pytest-args }}' = '-m flaky' ]; then
           set +e # Disable immediate exit
-          ddev env test --new-env --junit ${{ inputs.target }} -- all ${{ inputs.pytest-args }}
+          ddev env test --new-env --junit ${{ inputs.target }} -- all ${{ inputs.pytest-args }} -k "not fips"
           exit_code=$?
           if [ $exit_code -eq 5 ]; then
             # Flaky test count can be zero, this is done to avoid pipeline failure
@@ -319,7 +319,7 @@
           fi
         elif [ '${{ inputs.pytest-args }}' = '-m "not flaky"' ]; then
           set +e # Disable immediate exit
-          ddev env test --new-env --junit ${{ inputs.target }} -- all ${{ inputs.pytest-args }}
+          ddev env test --new-env --junit ${{ inputs.target }} -- all ${{ inputs.pytest-args }} -k "not fips"
           exit_code=$?
           if [ $exit_code -eq 5 ]; then
             # Flaky test count can be zero, this is done to avoid pipeline failure
@@ -329,7 +329,7 @@
             exit $exit_code
           fi
         else
-          ddev env test --new-env --junit ${{ inputs.target }} ${{ inputs.pytest-args != '' && format('-- all {0}', inputs.pytest-args) || '' }}
+          ddev env test --new-env --junit ${{ inputs.target }} ${{ inputs.pytest-args != '' && format('-- all {0} -k "not fips"', inputs.pytest-args) || '-- all -k "not fips"' }}
         fi
     - name: Run benchmarks
@@ -355,7 +355,7 @@
         # by default
         if [ '${{ inputs.pytest-args }}' = '-m flaky' ]; then
           set +e # Disable immediate exit
-          ddev env test --base --new-env --junit ${{ inputs.target }}:latest -- all ${{ inputs.pytest-args }}
+          ddev env test --base --new-env --junit ${{ inputs.target }}:latest -- all ${{ inputs.pytest-args }} -k "not fips"
           exit_code=$?
           if [ $exit_code -eq 5 ]; then
             # Flaky test count can be zero, this is done to avoid pipeline failure
@@ -376,7 +376,7 @@
             exit $exit_code
           fi
         else
-          ddev env test --base --new-env --junit ${{ inputs.target }}:latest ${{ inputs.pytest-args != '' && format('-- all {0}', inputs.pytest-args) || '' }}
+          ddev env test --base --new-env --junit ${{ inputs.target }}:latest ${{ inputs.pytest-args != '' && format('-- all {0} -k "not fips"', inputs.pytest-args) || '-- all -k "not fips"' }}
         fi
     - name: View trace log
1 change: 1 addition & 0 deletions datadog_checks_base/changelog.d/19179.security
@@ -0,0 +1 @@
Add FIPS switch
5 changes: 5 additions & 0 deletions datadog_checks_base/datadog_checks/base/checks/base.py
@@ -6,6 +6,7 @@
 import importlib
 import inspect
 import logging
+import os
 import re
 import traceback
 import unicodedata
@@ -46,6 +47,7 @@
 from ..utils.agent.utils import should_profile_memory
 from ..utils.common import ensure_bytes, to_native_string
 from ..utils.diagnose import Diagnosis
+from ..utils.fips import enable_fips
 from ..utils.http import RequestsWrapper
 from ..utils.limiter import Limiter
 from ..utils.metadata import MetadataManager
@@ -307,6 +309,9 @@ def __init__(self, *args, **kwargs):
         self.__formatted_tags = None
         self.__logs_enabled = None
 
+        if os.environ.get("GOFIPS", "0") == "1":
+            enable_fips()
+
     def _create_metrics_pattern(self, metric_patterns, option_name):
         all_patterns = metric_patterns.get(option_name, [])
32 changes: 32 additions & 0 deletions datadog_checks_base/datadog_checks/base/utils/fips.py
@@ -0,0 +1,32 @@
# (C) Datadog, Inc. 2024-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)

import os


def enable_fips(path_to_openssl_conf=None, path_to_openssl_modules=None):
    path_to_embedded = None
    if os.getenv("OPENSSL_CONF") is None:
        if path_to_openssl_conf is None:
            path_to_embedded = _get_embedded_path() if path_to_embedded is None else path_to_embedded
            path_to_openssl_conf = path_to_embedded / "ssl" / "openssl.cnf"
            if not path_to_openssl_conf.exists():
                raise RuntimeError(f'The configuration file "{path_to_openssl_conf}" does not exist')
        os.environ["OPENSSL_CONF"] = str(path_to_openssl_conf)

    if os.getenv("OPENSSL_MODULES") is None:
        if path_to_openssl_modules is None:
            path_to_embedded = _get_embedded_path() if path_to_embedded is None else path_to_embedded
            path_to_openssl_modules = path_to_embedded / "lib" / "ossl-modules"
            if not path_to_openssl_modules.exists():
                raise RuntimeError(f'The directory "{path_to_openssl_modules}" does not exist')
        os.environ["OPENSSL_MODULES"] = str(path_to_openssl_modules)


def _get_embedded_path():
    import sys
    from pathlib import Path

    embedded_dir = "embedded3" if os.name == 'nt' else "embedded"
    return Path(sys.executable.split("embedded")[0] + embedded_dir)
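For illustration only, a minimal sketch of calling this helper directly. The paths below are assumptions (on an Agent install they would sit under the embedded directory); the function only exports OPENSSL_CONF and OPENSSL_MODULES when they are not already set in the environment.

```python
from pathlib import Path

from datadog_checks.base.utils.fips import enable_fips

# Assumed embedded locations, for illustration; adjust to the actual install.
enable_fips(
    path_to_openssl_conf=Path("/opt/datadog-agent/embedded/ssl/openssl.cnf"),
    path_to_openssl_modules=Path("/opt/datadog-agent/embedded/lib/ossl-modules"),
)
```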
17 changes: 17 additions & 0 deletions datadog_checks_base/tests/base/checks/test_agent_check.py
@@ -5,6 +5,7 @@
 # Licensed under a 3-clause BSD style license (see LICENSE)
 import json
 import logging
+import os
 from typing import Any  # noqa: F401
 
 import mock
@@ -1293,3 +1294,19 @@ def test_detect_typos_configuration_models(
     assert "Detected potential typo in configuration option" not in caplog.text
 
     assert typos == set(unknown_options)
+
+
+def test_env_var_logic_default():
+    with mock.patch.dict('os.environ', {'GOFIPS': '0'}):
+        AgentCheck()
+        assert os.getenv('OPENSSL_CONF', None) is None
+        assert os.getenv('OPENSSL_MODULES', None) is None
+
+
+def test_env_var_logic_preset():
+    preset_conf = 'path/to/openssl.cnf'
+    preset_modules = 'path/to/ossl-modules'
+    with mock.patch.dict('os.environ', {'GOFIPS': '1', 'OPENSSL_CONF': preset_conf, 'OPENSSL_MODULES': preset_modules}):
+        AgentCheck()
+        assert os.getenv('OPENSSL_CONF', None) == preset_conf
+        assert os.getenv('OPENSSL_MODULES', None) == preset_modules
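A possible companion test for the GOFIPS=1 path, not part of this commit: enable_fips is mocked so the embedded OpenSSL paths do not need to exist on the machine running the suite (mock and AgentCheck are already imported at the top of this test module; monkeypatch is the standard pytest fixture).

```python
def test_env_var_logic_enabled(monkeypatch):
    # Hypothetical sketch: with GOFIPS=1 and no OPENSSL_* overrides set,
    # AgentCheck.__init__ is expected to call enable_fips().
    monkeypatch.setenv('GOFIPS', '1')
    monkeypatch.delenv('OPENSSL_CONF', raising=False)
    monkeypatch.delenv('OPENSSL_MODULES', raising=False)
    with mock.patch('datadog_checks.base.checks.base.enable_fips') as mocked_enable_fips:
        AgentCheck()
    mocked_enable_fips.assert_called_once()
```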
1 change: 1 addition & 0 deletions ddev/changelog.d/19179.security
@@ -0,0 +1 @@
Add FIPS switch
7 changes: 6 additions & 1 deletion ddev/src/ddev/e2e/agent/docker.py
@@ -113,7 +113,12 @@ def start(self, *, agent_build: str, local_packages: dict[Path, str], env_vars:
 
         if agent_build.startswith("datadog/"):
             # Add a potentially missing `py` suffix for default non-RC builds
-            if 'rc' not in agent_build and 'py' not in agent_build and not re.match(AGENT_VERSION_REGEX, agent_build):
+            if (
+                'rc' not in agent_build
+                and 'py' not in agent_build
+                and 'fips' not in agent_build
+                and not re.match(AGENT_VERSION_REGEX, agent_build)
+            ):
                 agent_build = f'{agent_build}-py{self.python_version[0]}'
 
         if self.metadata.get('use_jmx') and not agent_build.endswith('-jmx'):
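As a hedged illustration of the adjusted condition (tag names are examples; the AGENT_VERSION_REGEX check is omitted for brevity):

```python
# Illustrative only: which dev image tags get the '-py<major>' suffix appended.
for tag in ('datadog/agent-dev:master', 'datadog/agent-dev:master-fips'):
    adds_py_suffix = 'rc' not in tag and 'py' not in tag and 'fips' not in tag
    print(tag, '->', 'py suffix added' if adds_py_suffix else 'left unchanged')
# datadog/agent-dev:master      -> py suffix added
# datadog/agent-dev:master-fips -> left unchanged
```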
(Diffs for the remaining changed files are not shown here.)