
Rails Security Assessment Framework (Personal Project)

Personal project: a security assessment framework for Ruby on Rails applications, combining static analysis (SAST), dynamic testing (DAST), and manual code review. Built with AI-assisted development workflows (Cursor + Claude) to demonstrate systematic assessment methodology and tool integration. This is a learning and demonstration project, not production-deployed.

Tags: Security · AI/ML · Web Development · Automation · Compliance · Python

Challenge

Assessing the security of a Ruby on Rails application requires combining multiple analysis techniques, but traditional security tools operate in isolation: each emits its own report format, with no shared findings model, systematic integration, or automated reporting. Teams that want actionable results end up stitching tool output together by hand. This project set out to build a framework that integrates automated tooling with manual analysis in a single, repeatable assessment workflow.

Solution

Developed a systematic framework for evaluating Rails application security, built with AI-assisted development workflows:

  • Multi-Modal Assessment: Combines static analysis (SAST), dynamic testing (DAST), and manual code review

  • Tool Integration: Integrates industry-standard security tools (Brakeman, Bundle Audit, Semgrep, Snyk, ZAP Proxy, Burp Suite)

  • AI-Assisted Analysis: Leverages AI for pattern recognition, threat modeling, and security analysis

  • Structured Findings: Categorizes findings according to Rails Security Guide standards, normalized into a common schema (a sample record is sketched after this list)

  • Automated Reporting: Generates comprehensive reports with remediation guidance

  • CI/CD Integration: Automated security checks in development pipeline

  • Compliance Validation: Checks against security standards and frameworks

Together these components form a single assessment workflow: each tool's raw output is parsed, normalized into a shared schema, and consolidated for reporting, with AI-assisted analysis layered on top.
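
For illustration, here is a minimal sketch of what a normalized finding record might look like, validated against the shared JSON schema the pipeline loads (data/schemas/normalized_finding.json). The field names and severity levels are assumptions for illustration, not the project's actual schema:

schema_example.py (Python, illustrative)
import json

from jsonschema import validate  # pip install jsonschema

# Hypothetical normalized finding; the real field set lives in
# data/schemas/normalized_finding.json.
finding = {
    "source_tool": "brakeman",
    "rule_id": "SQL Injection",
    "severity": "high",
    "file": "app/models/user.rb",
    "line": 42,
    "description": "Possible SQL injection via string interpolation",
    "remediation": "Use parameterized ActiveRecord query methods",
}

# Simplified schema sketch matching the record above.
schema = {
    "type": "object",
    "required": ["source_tool", "rule_id", "severity", "description"],
    "properties": {
        "source_tool": {"type": "string"},
        "rule_id": {"type": "string"},
        "severity": {"enum": ["info", "low", "medium", "high", "critical"]},
        "file": {"type": "string"},
        "line": {"type": "integer"},
        "description": {"type": "string"},
        "remediation": {"type": "string"},
    },
}

validate(instance=finding, schema=schema)  # raises ValidationError on mismatch
print(json.dumps(finding, indent=2))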

Code Examples

ETL Pipeline: Orchestrates security tool integration and finding normalization

main.py (Python)
import glob
import logging
from pathlib import Path

# Project-internal imports are assumed: setup_logging, load_config,
# load_schema, and the per-tool parser classes used below.

def main(dry_run: bool = False, log_level: str = 'INFO') -> None:
    """Main entry point.

    Args:
        dry_run: Whether to skip writing output
        log_level: Logging level to use
    """
    setup_logging(log_level)
    logger = logging.getLogger('main')

    # Set up paths
    base_dir = Path(__file__).parent.parent.parent
    findings_dir = base_dir / 'findings'

    # Load config and schema
    config = load_config(str(base_dir / 'scripts' / 'etl' / 'config' / 'parser_configs.yaml'))
    schema = load_schema(str(base_dir / 'data' / 'schemas' / 'normalized_finding.json'))

    # Process SAST findings
    logger.info("Processing SAST findings")

    # Bearer
    bearer_files = glob.glob(str(findings_dir / 'sast' / 'bearer*.json'))
    bearer_parser = BearerParser(config.get('bearer', {}), schema)
    for file in bearer_files:
        process_file(file, bearer_parser, dry_run, logger)

    # Brakeman
    brakeman_files = glob.glob(str(findings_dir / 'sast' / 'brakeman*.json'))
    brakeman_parser = BrakemanParser(config.get('brakeman', {}), schema)
    for file in brakeman_files:
        process_file(file, brakeman_parser, dry_run, logger)

    # Snyk (chunked)
    snyk_parser = SnykParser(config.get('snyk', {}), schema)
    snyk_chunks = glob.glob(str(findings_dir / 'sast' / 'snyk-test-chunks' / '*.json'))
    for chunk in snyk_chunks:
        process_file(chunk, snyk_parser, dry_run, logger)

    snyk_code_chunks = glob.glob(str(findings_dir / 'sast' / 'snyk-code-test-chunks' / '*.json'))
    for chunk in snyk_code_chunks:
        process_file(chunk, snyk_parser, dry_run, logger)

    # Process DAST findings
    logger.info("Processing DAST findings")

    # Nikto
    nikto_files = glob.glob(str(findings_dir / 'dast' / 'nikto*.json'))
    nikto_parser = NiktoParser(config.get('nikto', {}), schema)
    for file in nikto_files:
        process_file(file, nikto_parser, dry_run, logger)

    # Wapiti
    wapiti_files = glob.glob(str(findings_dir / 'dast' / 'wapiti*.json'))
    wapiti_parser = WapitiParser(config.get('wapiti', {}), schema)
    for file in wapiti_files:
        process_file(file, wapiti_parser, dry_run, logger)

    # ZAP
    zap_files = glob.glob(str(findings_dir / 'dast' / 'zaproxy' / '*.json'))
# ... (truncated)
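
A minimal usage sketch: how main() might be wired up as a command-line entry point. The flag names mirror the function's parameters and are assumptions, not the project's actual CLI:

cli_example.py (Python, illustrative)
import argparse

if __name__ == '__main__':
    arg_parser = argparse.ArgumentParser(description='Normalize security tool findings')
    arg_parser.add_argument('--dry-run', action='store_true',
                            help='Parse findings but skip writing output')
    arg_parser.add_argument('--log-level', default='INFO',
                            choices=['DEBUG', 'INFO', 'WARNING', 'ERROR'])
    args = arg_parser.parse_args()
    main(dry_run=args.dry_run, log_level=args.log_level)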

Finding Processor: Normalizes and consolidates security findings across tools

main.py (Python)
import json
import logging
import os
from pathlib import Path
from typing import Any, Dict, List, Optional

def process_file(
    file_path: str,
    parser: Any,
    dry_run: bool = False,
    logger: Optional[logging.Logger] = None
) -> List[Dict[str, Any]]:
    """Process a single findings file.

    Args:
        file_path: Path to findings file
        parser: Parser instance to use
        dry_run: Whether to skip writing output
        logger: Logger instance

    Returns:
        List of normalized findings
    """
    if logger is None:
        logger = logging.getLogger(__name__)

    logger.info(f"Processing {file_path}")

    try:
        with open(file_path, 'r') as f:
            content = json.load(f)

        findings = parser.parse(content)
        logger.info(f"Found {len(findings)} findings in {file_path}")

        if not dry_run and findings:
            # Create output directory if needed
            output_dir = Path(__file__).parent.parent.parent / 'data' / 'normalized_findings'
            os.makedirs(output_dir, exist_ok=True)

            # Generate output filename based on input file
            input_path = Path(file_path)
            tool_name = parser.source_tool if hasattr(parser, 'source_tool') else input_path.stem.split('_')[0]
            output_file = output_dir / f"{tool_name}_{input_path.stem}.json"

            # Write normalized findings
            with open(output_file, 'w') as f:
                json.dump(findings, f, indent=2)
            logger.info(f"Wrote {len(findings)} findings to {output_file}")

        return findings

    except Exception as e:
        logger.error(f"Failed to process {file_path}: {e}")
        if not parser.config.get('skip_on_error', False):
            raise
        return []  # with skip_on_error set, a failed file yields no findings
# ... (truncated)
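
The per-tool parsers used above (BearerParser, BrakemanParser, NiktoParser, and so on) appear to share a common interface: each is constructed with a tool-specific config plus the target schema, and exposes parse() and a source_tool name. A minimal base-class sketch under those assumptions (not the project's actual code):

base_parser_example.py (Python, illustrative)
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class BaseParser(ABC):
    """Interface that process_file() relies on."""

    source_tool: str = 'unknown'

    def __init__(self, config: Dict[str, Any], schema: Dict[str, Any]) -> None:
        self.config = config    # tool-specific options, e.g. skip_on_error
        self.schema = schema    # normalized finding schema to target

    @abstractmethod
    def parse(self, content: Any) -> List[Dict[str, Any]]:
        """Map one tool's raw JSON output to normalized findings."""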
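
Cross-tool consolidation is not shown in the excerpt; one plausible approach, sketched here with the hypothetical field names from the schema example above, is to de-duplicate normalized findings on a (rule, file, line) fingerprint:

consolidate_example.py (Python, illustrative)
from typing import Any, Dict, List, Tuple

def consolidate(findings: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Merge findings that different tools report at the same location."""
    merged: Dict[Tuple[Any, Any, Any], Dict[str, Any]] = {}
    for f in findings:
        key = (f.get('rule_id'), f.get('file'), f.get('line'))
        if key in merged:
            # Record every tool that flagged this location.
            merged[key]['tools'].append(f.get('source_tool'))
        else:
            entry = dict(f)
            entry['tools'] = [f.get('source_tool')]
            merged[key] = entry
    return list(merged.values())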

Key Metrics

  • Multi-modal assessment framework (SAST, DAST, manual review)

  • Integration with 6+ industry-standard security tools

  • AI-assisted analysis for pattern recognition and threat modeling

  • Automated reporting with structured findings and remediation guidance

Security Impact

The framework provides a systematic, repeatable evaluation of Rails application security: automated tools feed a shared findings pipeline, manual review covers what the tools miss, and the normalized output supports remediation tracking and compliance validation. It is aimed at security engineers, penetration testers, and enterprise security programs.

Results

Delivered a working Rails security assessment framework that integrates multiple analysis techniques and security tools behind a common findings pipeline, with automated reporting and compliance checks. Building it end-to-end with AI-assisted workflows (Cursor + Claude) doubled as a demonstration of modern, AI-assisted security engineering practice.
