The report command will generate a high-fidelity recording of a linting run. The report is a replayable data set that can be re-rendered at any time using either the dashboard or the html-report commands.

The original results are preserved and won’t be changed, regardless of how many times the report is re-rendered.

vacuum report my-openapi-spec.yaml myreport

This will generate a report from my-openapi-spec.yaml and save it as myreport-MM-DD-YY-HH-MM-SS.json.

You can save these reports and replay them whenever you want. Soon, you will be able to replay multiple reports over time!
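To replay a saved report, pass the report file to one of the replay commands instead of a specification. A quick sketch; the file name here is a placeholder for your own timestamped report:

vacuum dashboard myreport-01-01-24-09-30-00.json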

Compression is best

One thing that I highly recommend is using compression with vacuum reports. It’s all automatic; all that is required is to pass the -c or --compress flag.

The compressed file is significantly smaller than the original specification; tiny, really.

When replaying a compressed report, vacuum will automatically detect that it’s compressed and unpack it.

vacuum report -c model/test_files/petstorev3.json myreport

This will generate a compressed report from petstorev3.json and save the file as myreport-MM-DD-YY-HH-MM-SS.json.gz.
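If you ever want to peek inside a compressed report without replaying it, standard gzip tooling is all you need. A minimal sketch, assuming jq is installed (the real file name will carry your own timestamp):

# list the top-level keys of a compressed report
gunzip -c myreport-01-01-24-09-30-00.json.gz | jq 'keys'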


JUnit XML Support

Are you using Jenkins or some other CI that supports JUnit XML reports? vacuum can generate a JUnit XML report instead of a vacuum-format JSON report.

Since v0.0.54, the -j or --junit flag can be used to do exactly that.

vacuum report -j model/test_files/petstorev3.json myreport

JUnit Failure Behavior

By default, only error severity violations create <failure> elements in the JUnit XML output. This means:

  • Errors create test failures (CI will fail)
  • Warnings, info, and hint create passing test cases (CI won’t fail)

This behavior allows you to track all linting results in your CI dashboard while only failing the build on actual errors.

If you want warnings to also fail your CI build, use the --junit-fail-on-warn flag:

vacuum report -j --junit-fail-on-warn my-spec.yaml myreport

With this flag:

  • Errors and warnings create test failures
  • Info and hint remain as passing test cases
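To sanity check the generated XML locally before wiring it into CI, something like the following works. This is only a sketch; the .xml extension and timestamp pattern of the output file are assumptions, so check the actual file name vacuum writes:

vacuum report -j --junit-fail-on-warn my-spec.yaml myreport
# count <failure> elements in the generated JUnit file (name pattern assumed)
grep -c "<failure" myreport-*.xml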

Available Flags

report supports the following flags

Short Full Input Description
-c --compress bool Compress the report with gzip (recommended)
-n --no-pretty bool Render a machine-only version (can’t be used with -c)
-h --help bool Show help screen and all flag details
-q --no-style bool Disable color and style console output (useful for CI/CD)
-i --stdin bool Use stdin instead of reading OpenAPI spec from a file
-o --stdout bool Use stdout instead of writing report to a file
-j --junit bool Render a JUnit XML report (can’t be used with -c)
--junit-fail-on-warn bool Treat warnings as failures in JUnit report (default: only errors are failures)
--min-score int Set a minimum score threshold required (higher than 10), returns an error code if not met
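The --min-score flag is handy in CI: if the final score comes in below the threshold, the command returns a non-zero exit code and the step fails. A quick sketch; the threshold of 80 is arbitrary:

vacuum report --min-score 80 -c my-openapi-spec.yaml myreport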

Global Flags

report supports the following global flags

Short Full Input Description
-r --ruleset string Use an existing ruleset file for linting
-f --functions string Path to custom function plugins
-t --time bool Show how long vacuum took to run (ms)
-p --base string Base URL or base working directory to use for relative references
-u --remote bool Load remote references automatically if possible (default is true)
-k --skip-check bool Skip checking for a valid OpenAPI document, useful for linting fragments or non-OpenAPI documents
-g --timeout int How long (in seconds) to wait for a rule before it times out (default is 5 seconds)
-z --hard-mode bool Enable every single built-in rule (including OWASP). Only for adventurers and brave souls.
--ext-refs bool Allow $ref pointers inside extension objects (x-) to be looked up
--cert-file string Path to client certificate file for HTTPS requests that require custom certificates
--key-file string Path to client private key file for HTTPS requests that require custom certificates
--ca-file string Path to CA certificate file for HTTPS requests that require custom certificates
--insecure bool Skip TLS certificate verification (insecure)
--allow-private-networks bool Allow fetch() to access private/local networks (localhost, 10.x, 192.168.x)
--allow-http bool Allow fetch() to use HTTP (non-HTTPS) URLs in custom JavaScript functions
--fetch-timeout int Timeout for fetch() requests in seconds (default 30)
--lookup-timeout int Node lookup timeout value in ms (default 500ms)
--original string Path to original/old spec file for inline comparison (filters results to changed areas)
--changes string Path to change report JSON file for filtering results to changed areas only
--changes-summary bool Show summary of what was filtered by --changes or --original
--breaking-config string Path to breaking rules config file (default: ./changes-rules.yaml)
--warn-on-changes bool Inject warning violations for each detected API change
--error-on-breaking bool Inject error violations for each breaking change

Full flags begin with a double hyphen.
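Global flags can be combined with the report flags above. For example, here is a sketch that runs every built-in rule in hard mode, gives each rule a longer timeout, and compresses the result:

vacuum report -z -g 30 -c my-openapi-spec.yaml myreport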

Examples

Want something easy to copy and paste?

Optimized for machines, without compression

vacuum report -n model/test_files/petstorev3.json petstore

Use an existing RuleSet with compression

vacuum report -r rulesets/examples/specific-ruleset.yaml \
  -c model/test_files/petstorev3.json petstore

Use stdin and stdout

Here is an example of how to use stdin and stdout with the report command, piping the output to jq.

echo "openapi: 3.0.1" | vacuum report -i -o | jq

Change Detection

The report command supports change detection to filter results to changed areas only:

vacuum report -c new-api.yaml --original old-api.yaml myreport
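To also see what the filter removed, add --changes-summary from the global flags above; it shows a summary of what was filtered. A sketch using the same placeholder file names:

vacuum report -c new-api.yaml --original old-api.yaml --changes-summary myreport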

For comprehensive documentation on change detection features, see the Change Detection guide.

Compatible commands

Reports can be replayed through the following commands: