# Troubleshooting
Common issues and frequently asked questions about Pacta.
## Common Errors
"No analyzers found"
Cause: Pacta couldn't find any supported language analyzers for your codebase.
Solution:

- Ensure you have Python files (`.py`) in the directories specified in `roots`
- Check that your `architecture.yml` has correct `roots` paths (see the example below)
- Verify the paths are relative to your repository root
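For example, if your sources live under `src/`, the model's `roots` should point there. This is a sketch; the container name `app` is illustrative, and the layout follows the monorepo example later on this page:

```yaml
containers:
  app:
    code:
      roots: [src]   # relative to the repository root; must contain .py files
```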
"Model validation failed"
Cause: Your `architecture.yml` has invalid structure or is missing required fields.
Solution:

Check for these common issues (a valid minimal model is sketched below):

- Missing `version` field (must be `1`)
- Missing `system.id`
- Invalid layer patterns (must be valid glob patterns)
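A minimal sketch of a model that passes these checks, assuming the same schema as the monorepo example later on this page (all names are illustrative):

```yaml
version: 1                 # must be 1
system:
  id: my-system            # required
containers:
  app:
    code:
      roots: [src]
      layers:
        domain:
          patterns: [src/domain/**]   # valid glob patterns
        infra:
          patterns: [src/infra/**]
```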
"Rules parsing error"
Cause: Your `rules.pacta.yml` has invalid YAML syntax or rule structure.
Solution:

- Validate YAML syntax (use a YAML linter, or the one-liner below)
- Ensure each rule has the required fields
- Check condition syntax
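As a quick first check, round-trip the file through a YAML parser; this surfaces indentation and quoting errors with line numbers before Pacta ever reads the file (assumes PyYAML is installed):

```bash
# Exits nonzero with a line/column-annotated error if the YAML is malformed
python -c "import yaml; yaml.safe_load(open('rules.pacta.yml'))"
```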
"Baseline not found"
Cause: You're trying to compare against a baseline that doesn't exist.
Solution:

- Create a baseline first (see the command below)
- Check that the `.pacta/` directory exists and contains snapshots
- If using CI, ensure the `.pacta/` directory is cached or committed
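Based on the snapshot commands shown later on this page, creating and verifying a baseline looks like:

```bash
# Record the current state under the "baseline" ref
pacta snapshot save . --ref baseline

# Confirm the snapshot store now exists
ls .pacta/snapshots/refs/
```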
"Permission denied" reading files
Cause: Pacta doesn't have read access to some files in your repository.
Solution:

- Check file permissions: `ls -la [file]` (example below)
- Ensure you're running Pacta with appropriate user permissions
- In CI, verify the checkout step has correct permissions
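If a file turns out to be unreadable, granting your user read access is usually enough (the paths below are illustrative):

```bash
# Inspect permissions on the file Pacta reported
ls -la src/app/module.py

# Grant the owning user read access to the whole tree
# (u+rX adds read everywhere, execute only on directories)
chmod -R u+rX src/
```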
## Frequently Asked Questions
### How do I ignore test files?
Configure your `architecture.yml` to exclude test directories from the `roots`:
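A sketch, assuming tests live in a top-level `tests/` directory and sources in `src/` (container name `app` is illustrative). Since `tests/` is not listed in `roots`, test files are never scanned:

```yaml
containers:
  app:
    code:
      roots: [src]   # tests/ is omitted, so it is never analyzed
```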
Or, if tests are inside `src/`, use layer patterns that exclude them:
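For example, assuming tests sit under `src/tests/`, the layer patterns can simply never match them, so they belong to no layer (whether your Pacta version also supports explicit exclude patterns isn't covered on this page):

```yaml
containers:
  app:
    code:
      roots: [src]
      layers:
        domain:
          patterns: [src/domain/**]   # src/tests/** matches neither layer
        infra:
          patterns: [src/infra/**]
```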
### Can I use multiple rules files?
Yes, use the `--rules` option multiple times:
```bash
pacta scan . \
  --model architecture.yml \
  --rules rules/base.pacta.yml \
  --rules rules/strict.pacta.yml
```
Rules from all files are combined and evaluated together.
### What languages are supported?
Currently supported:
- Python - Full support via AST analysis
Coming soon:
- Java
- Go
- C#
### How do baselines work?
Baselines are content-addressed snapshots of your architecture at a point in time. They're stored in `.pacta/snapshots/`:
- Objects (`.pacta/snapshots/objects/`) - Immutable snapshot files named by 8-char hash
- Refs (`.pacta/snapshots/refs/`) - Named pointers (like `baseline`, `latest`) to object hashes
- Create baseline: Saves current violations with a reference name
- Compare against baseline: Only reports new violations
- Violation statuses:
  - `new` - Violation introduced after baseline (fails CI)
  - `existing` - Violation present in baseline (doesn't fail CI)
  - `fixed` - Violation in baseline but now resolved
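Putting this together with the commands shown elsewhere on this page (the exact `pacta scan` flag for comparing against a baseline isn't shown here; see the CLI Reference):

```bash
# Record the accepted state of the architecture
pacta snapshot save . --ref baseline

# Later, after changes: save again and diff against the baseline
pacta snapshot save . --ref latest
pacta diff . --from baseline --to latest
```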
### How do I see what changed between scans?
Use the `diff` command:
```bash
# Save two snapshots
pacta snapshot save . --ref v1
# ... make changes ...
pacta snapshot save . --ref v2

# Compare them
pacta diff . --from v1 --to v2
```
### Can I run Pacta on a monorepo?
Yes. Define multiple containers in your `architecture.yml`:
```yaml
containers:
  service-a:
    code:
      roots: [services/service-a/src]
      layers:
        domain:
          patterns: [services/service-a/src/domain/**]
        infra:
          patterns: [services/service-a/src/infra/**]
  service-b:
    code:
      roots: [services/service-b/src]
      layers:
        domain:
          patterns: [services/service-b/src/domain/**]
        infra:
          patterns: [services/service-b/src/infra/**]
```
### Why am I seeing violations I didn't expect?
Common causes:

- Glob patterns too broad: Check that your layer patterns don't overlap
- Transitive dependencies: Module A imports B, B imports C. If A is in `domain` and C is in `infra`, you might see violations even if B is in `application`.
- Re-exports: Python re-exports (e.g., `from .submodule import *`) can create unexpected dependencies.
Debug with verbose output:
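The exact flag isn't spelled out on this page, so the `--verbose` below is an assumption based on common CLI conventions; confirm with `pacta scan --help`:

```bash
# --verbose is assumed, not confirmed by this page; check `pacta scan --help`
pacta scan . --model architecture.yml --rules rules.pacta.yml --verbose
```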
### How do I track architecture metrics over time?
Use the `history` commands:
```bash
# View timeline of snapshots
pacta history show . --last 20

# View violation trends
pacta history trends . --metric violations

# View coupling trends (edges/nodes ratio)
pacta history trends . --metric density

# Export as image for documentation
pacta history trends . --output trends.png
```
### What's the performance impact on large codebases?
Pacta parses the Python AST, which is fast but scales with codebase size. For large codebases:
- Limit scope: Only include relevant directories in `roots`
- Use quiet mode: `-q` reduces output processing time
- Incremental checks: Consider `--mode changed_only` (if supported)
Typical performance:

- Small projects (<100 files): <1 second
- Medium projects (100-1000 files): 1-5 seconds
- Large projects (1000+ files): 5-30 seconds
## Getting Help
If you're stuck:
- Check the CLI Reference for command options
- Look at the example project
- Open an issue on GitHub