FAQ
How the AI scan works, what human review covers, and how to get started.
CodeWatchdog combines automated AI analysis with senior engineer review to find vulnerabilities, logic errors, and architectural problems before production. Two layers: a fast AI Deep Scan on-page, and an optional Senior Dev Review where engineers audit your full codebase — not just what the AI flagged.
"Vibe coding" is building software through AI-assisted generation: prompting, accepting suggestions, and shipping with minimal review. Tools like Cursor, Copilot, and Claude make the development cycle fast. The risk: AI generates plausible-looking code that can contain:
- Security vulnerabilities the model learned from training data
- Logic errors that only surface under specific conditions
- Subtle anti-patterns that compound over time
- Hallucinated API calls or incorrect library usage
- Missing input validation and error handling
Without a review layer, these ship to production. In fintech, smart contracts, or user-data systems, the consequences can be severe.
CodeWatchdog is built for:
- Developers who use AI tools to accelerate their workflow and want a reliable second pass
- Founders and product teams shipping AI-generated code to production
- Projects where security matters — fintech, DeFi, healthcare, SaaS with sensitive data
- Teams that don't have a dedicated security engineer
- Anyone who inherited AI-generated code and wants to understand its risk profile
The AI Deep Scan tool is live and available in private beta. You can access it now by requesting an access code via the contact page.
The Senior Dev Review (human audit) is available on request. Submit your project details via the contact form and we'll respond within 24 hours with scope and availability.
You paste your code (or upload a file) into the scan interface. Our system sends it to Claude, Anthropic's large language model, with a custom security-focused prompt engineered specifically for AI-generated code patterns.
Claude analyzes the code and returns a structured JSON report, which we parse and display as a categorized list of findings. Each finding includes a title, detailed description, line reference where applicable, and a concrete fix recommendation.
The full report can be downloaded as a PDF with all findings and fix suggestions included.
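The report-handling step above can be sketched as follows. This is an illustrative sketch only: the field names (`severity`, `title`, `description`, `line`, `fix`) are assumptions for the example, not CodeWatchdog's exact schema.

```typescript
// Illustrative shape of one finding in the structured JSON report.
// Field names are assumptions, not the actual CodeWatchdog schema.
type Severity = "critical" | "high" | "medium" | "low" | "info";

interface Finding {
  severity: Severity;
  title: string;
  description: string;
  line?: number; // line reference, where applicable
  fix: string;   // concrete fix recommendation
}

// Group findings into a categorized list, ordered most to least severe,
// as they would be displayed in the scan results.
function categorize(findings: Finding[]): Map<Severity, Finding[]> {
  const order: Severity[] = ["critical", "high", "medium", "low", "info"];
  const grouped = new Map<Severity, Finding[]>();
  for (const sev of order) grouped.set(sev, []);
  for (const f of findings) grouped.get(f.severity)?.push(f);
  return grouped;
}
```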
The AI scan supports any language Claude understands, which includes virtually all mainstream languages:
- JavaScript, TypeScript (including JSX/TSX)
- Python
- Solidity (smart contracts)
- Go, Rust, Java, C#, PHP
- Bash / Shell scripts
- SQL
- HTML, CSS, JSON, YAML, XML
File upload supports: .js .ts .jsx .tsx .py .sol .go .rs .php .java .cs .sh .sql .txt .md .html .css .json .yml .yaml
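A client-side check against that extension list might look like the sketch below. This is a hypothetical illustration, not CodeWatchdog's actual upload validation code.

```typescript
// Hypothetical allowlist check based on the supported extension list.
const ALLOWED_EXTENSIONS = new Set([
  "js", "ts", "jsx", "tsx", "py", "sol", "go", "rs", "php", "java",
  "cs", "sh", "sql", "txt", "md", "html", "css", "json", "yml", "yaml",
]);

function isAllowedFile(filename: string): boolean {
  const dot = filename.lastIndexOf(".");
  if (dot === -1 || dot === filename.length - 1) return false; // no extension
  const ext = filename.slice(dot + 1).toLowerCase();
  return ALLOWED_EXTENSIONS.has(ext);
}
```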
The scan is tuned to identify:
- Security vulnerabilities — SQL injection, XSS, CSRF, auth bypass, path traversal, command injection, insecure deserialization, exposed secrets and API keys
- Logic errors — off-by-one errors, incorrect null handling, wrong assumptions, race conditions, improper async/await usage
- AI-generated anti-patterns — over-trusting user input, missing error handling, hallucinated library methods, insecure defaults, incomplete validation
- Architecture issues — hardcoded values, missing input sanitization, broken access control patterns
- Dependency risks — visible use of outdated or known-vulnerable packages
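As a concrete illustration of the first category, the classic SQL injection pattern the scan targets is user input concatenated into a query string rather than passed as a bound parameter. This is a generic sketch, not tied to any specific database driver:

```typescript
// Vulnerable: user input is concatenated straight into the SQL string,
// so a crafted id like "1 OR 1=1" changes the query's meaning.
function unsafeQuery(userId: string): string {
  return `SELECT * FROM users WHERE id = ${userId}`;
}

// Safer: the query uses a placeholder and the input travels separately
// as a bound parameter (placeholder syntax varies by driver).
function safeQuery(userId: string): { sql: string; params: string[] } {
  return { sql: "SELECT * FROM users WHERE id = ?", params: [userId] };
}
```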
AI analysis has high recall for known vulnerability patterns and consistently catches common issues. However, it can produce false positives (flagging something that isn't actually a problem) and false negatives (missing subtle, context-dependent issues).
For this reason, the AI scan is designed as a first pass, not a final authority. It is most effective when combined with a human review, particularly for security-critical code. The human layer validates AI findings, identifies issues the AI missed, and provides context-aware judgment.
The current limit is 60KB per scan, which covers most single files and many smaller modules. For larger codebases, we recommend breaking the review into logical sections or submitting a request for a full human review, which covers the entire codebase without size restrictions.
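Breaking a larger file into sections for separate scans can be done at line boundaries so no chunk exceeds the limit. A minimal sketch, assuming the 60KB figure above; the helper itself is hypothetical:

```typescript
// Hypothetical helper: split source code at line boundaries so each
// chunk stays under a byte budget (e.g. the 60KB scan limit). A single
// line larger than the budget still becomes its own oversized chunk.
function splitForScan(source: string, maxBytes = 60 * 1024): string[] {
  const byteLen = (s: string) => new TextEncoder().encode(s).length;
  const chunks: string[] = [];
  let current = "";
  for (const line of source.split("\n")) {
    const candidate = current === "" ? line : current + "\n" + line;
    if (byteLen(candidate) > maxBytes && current !== "") {
      chunks.push(current); // current chunk is full; start a new one
      current = line;
    } else {
      current = candidate;
    }
  }
  if (current !== "") chunks.push(current);
  return chunks;
}
```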
Most scans complete in 15 to 30 seconds, depending on code size and complexity. Larger or more complex files may take up to 60 seconds. There is no queue — results are returned in real time.
A human review is a full manual audit of your codebase conducted by a senior engineer with 10+ years of experience. It includes:
- Line-by-line review of security-critical paths
- Validation of business logic and edge case handling
- Architecture and design review — not just what the AI flagged
- Verification of dependency security and version risk
- Written report with specific findings, severity ratings, and fix guidance
- Direct access to the reviewer for questions and follow-up
The reviewer reads and understands your code, rather than just scanning it. This is the key distinction from an automated tool.
Our reviewers are senior engineers with deep specialization in security, backend systems, smart contracts, or the relevant technology stack for your project. All reviewers are vetted through a structured evaluation process before being onboarded.
We match reviewers to projects based on technology stack and domain expertise. You will know who is reviewing your code before the process begins.
Turnaround depends on the size and complexity of the codebase. Typical timelines:
- Small project (under 3K lines) — 24 to 48 hours
- Medium project (3K to 15K lines) — 3 to 5 business days
- Large or complex project — scoped individually
Expedited review is available on request. Contact us with your deadline and we will confirm whether it can be accommodated.
By default, the review delivers a written audit report with detailed findings and specific fix recommendations — enough for your development team to act on directly.
Code remediation (actually writing the fixes) is available as an add-on service. This is scoped separately based on the number and complexity of issues found. Contact us after the review if you want us to implement the fixes.
Yes. When submitting a review request via the contact form, specify the technology stack and any specific areas of concern — for example, smart contract security, authentication systems, or API design. We will match you with the most appropriate reviewer from our network.
You receive a structured written report containing:
- Executive summary with overall risk rating
- Full list of findings categorized by severity (Critical, High, Medium, Low, Informational)
- Detailed description of each issue with code references
- Concrete, actionable fix guidance for every finding
- Reviewer notes on architectural considerations
The report is delivered as a PDF and, if applicable, as annotated code comments in your repository.
Yes. Your code is never shared publicly, indexed, or used for training purposes. For AI scans, code is transmitted to Anthropic's API under their data processing terms — they do not use API inputs to train models. For human reviews, your code is shared only with the assigned reviewer under a confidentiality obligation.
We do store scan metadata (language, score, session ID) in our database, but we do not store the raw code content after the scan is complete.
Yes. For human review engagements, we sign a mutual NDA as standard practice before any code is shared. Contact us via the contact form to initiate this.
The scan tool is currently in private beta and access is restricted by an access code. All API endpoints enforce rate limiting (10 scans per hour per IP) and require authentication. Infrastructure runs on Cloudflare Pages and Workers, with D1 as the database. All secrets are stored as Cloudflare environment secrets, never in source code.
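The per-IP limit described above can be illustrated with a simple fixed-window counter. This is a generic in-memory sketch of the technique, not the production Workers/D1 implementation:

```typescript
// Generic fixed-window rate limiter: at most `limit` requests per
// window per key (e.g. client IP). Illustrative only; the production
// service runs on Cloudflare Workers with D1, not in-memory state.
class RateLimiter {
  private hits = new Map<string, { windowStart: number; count: number }>();
  constructor(private limit = 10, private windowMs = 60 * 60 * 1000) {}

  allow(key: string, now = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First request, or the previous window expired: start a new window.
      this.hits.set(key, { windowStart: now, count: 1 });
      return true;
    }
    if (entry.count >= this.limit) return false; // over the cap
    entry.count += 1;
    return true;
  }
}
```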
Human reviews start at $499 for a focused module or component review. Full codebase audits and ongoing project contracts are scoped individually — pricing depends on size, complexity, and turnaround. Submit your details via the contact form and we will send a quote within 24 hours.
No upfront payment required to receive a quote. Custom retainer and contract arrangements available for teams.
The AI scan is priced at $20 per scan. Access during private beta requires an access code — request one via the contact form. For ongoing projects or volume needs, custom pricing is available.
For the AI scan: request beta access via the contact form — select "Beta Access" as the request type. We will send you an access code within 48 hours.
For a human review: submit your project details via the contact form, describe your codebase and what you need reviewed, and we will respond with availability and a quote.
Yes. Enterprise plans with volume pricing, dedicated reviewers, SLA commitments, and direct integration into your CI/CD pipeline are available. Contact us via the contact form selecting "Enterprise" as the request type.