Test re-scan message #37
base: master
Conversation
👍 All Clear
I reviewed this PR for LLM security vulnerabilities across the six defined vulnerability classes (Prompt Injection, Data Exfiltration, Secrets/PII in Prompts, Insecure Output Handling, Excessive Agency, and Jailbreak Risks). The PR adds a test file with traditional command injection code, but there is no LLM involvement in the code changes.
Minimum severity threshold for this scan: 🟡 Medium | Learn more
@promptfoo-scanner-staging:
👍 All Clear
I reviewed this PR for LLM security vulnerabilities across all six classes (Prompt Injection, Data Exfiltration, PII/Secrets in Prompts, Insecure Output Handling, Excessive Agency, and Jailbreak Risks). The PR adds a test file with a command injection vulnerability, but this is a traditional security issue with no connection to LLM functionality.
Minimum severity threshold for this scan: 🟡 Medium | Learn more
To re-scan after making changes, comment @promptfoo-scanner on this PR.
@promptfoo-scanner-staging:
1 similar comment

@promptfoo-scanner-staging:
👍 All Clear
This PR adds a test file with a traditional command injection vulnerability, but contains no LLM-related code or data flows. Since this is an LLM-specific security scanner, no LLM security vulnerabilities were identified.
Minimum severity threshold for this scan: 🟡 Medium
To re-scan after making changes, comment @promptfoo-scanner on this PR.
Learn more about code scanning
@promptfoo-scanner-staging:
👍 All Clear
This PR adds a test file with a traditional command injection vulnerability, but contains no LLM-related code or functionality. The scanner focuses exclusively on LLM-specific security issues.
Minimum severity threshold for this scan: 🟡 Medium | To re-scan after making changes, comment @promptfoo-scanner on this PR
Learn more about code scanning
@promptfoo-scanner-staging:
👍 All Clear
This PR adds a test file (test-vuln.js) that contains a traditional command injection vulnerability. However, this scanner focuses exclusively on LLM security vulnerabilities. The command injection issue exists in standard Node.js code with no LLM involvement—no LLM API calls, prompt construction, or agent interactions were found.
Minimum severity threshold: 🟡 Medium | To re-scan after changes, comment @promptfoo-scanner
Learn more about code scanning
@promptfoo-scanner-staging:
👍 All Clear
I reviewed this PR for LLM security vulnerabilities across all six vulnerability classes (Prompt Injection, Data Exfiltration, Secrets/PII in Prompts, Insecure Output Handling, Excessive Agency, and Jailbreak Risks). The PR adds a test file with a traditional command injection vulnerability, but contains no LLM-related code, AI framework imports, or LLM data flows.
Minimum severity threshold: 🟡 Medium | To re-scan after changes, comment @promptfoo-scanner
Learn more
Testing re-scan instructions in PR review comment