Ciph Lab | Remote | Equity-Only (Pre-Seed)
About Ciph Lab
Ciph Lab is building Intelligence Resources™, software that operationalizes responsible AI governance at scale. We're a 4-month-old AI governance company, AI-first and remote-first, transitioning from consultancy to AI agents and a SaaS platform.
AI security isn't static—new jailbreaks, prompt injections, and model vulnerabilities emerge constantly. Traditional security assessments can't keep pace. We're building adaptive governance systems with security-by-design that evolve as the threat landscape changes.
The Opportunity
We're seeking a Principal AI Security & Risk Researcher to join our founding research team and lead our security track. This isn't traditional red teaming or pentesting—you'll be designing continuous security monitoring systems and building frameworks that help enterprises assess and mitigate AI risks at scale.
You'll research emerging AI threats (jailbreaks, prompt injections, model vulnerabilities), translate findings into actionable security frameworks, and collaborate with our technical team to build automated security testing and audit telemetry.
This is a founding research role: you'll hold equity and help define how organizations approach AI security.
What You'll Do
AI Security Research:
- Research emerging AI attack vectors, guardrail bypasses, and defense mechanisms
- Monitor threat intelligence feeds and security research communities
- Experiment with new AI security tools and assessment methodologies
- Stay current with LLM vulnerabilities, adversarial techniques, and model safety
Security Framework Design:
- Design security assessment frameworks for generative AI and agentic systems
- Develop risk evaluation methodologies that adapt as threats evolve
- Create audit telemetry and security monitoring protocols
- Translate security research into operational frameworks that enterprises can deploy
Building Adaptive Systems:
- Collaborate with the technical team to build automated security testing tools
- Design continuous threat monitoring and alerting systems
- Create security validation processes for framework updates
- Ensure monitoring systems themselves are secure (meta-security)
- Build audit trails for compliance documentation
Thought Leadership:
- Contribute to Ciph Lab's weekly newsletter on AI security and risk
- Position the company as a trusted voice in AI security governance
- Share insights publicly (while protecting proprietary methods)
What We're Looking For
Required:
- 5+ years in cybersecurity, with 2+ years focused on AI/ML security, red teaming, or adversarial testing
- Deep understanding of LLM architectures, prompt injection, jailbreaking, and model safety mechanisms
- Experience developing security testing frameworks or vulnerability assessment tools
- Strong research capabilities and the ability to translate technical findings into actionable frameworks
Preferred:
- Experience with AI governance frameworks (NIST AI RMF, ISO 42001, EU AI Act)
- Background in enterprise risk assessment or security audit methodologies
- Familiarity with agent architectures, RAG systems, or multi-modal AI security
- Published work in AI security, adversarial ML, or related fields
Critical Attributes:
- Self-directed: You identify threats proactively, set research priorities, and drive security strategy with minimal oversight
- Systems thinker: You see how security connects to governance, compliance, and technical implementation
- Continuous learner: You stay ahead of rapidly evolving AI threats and defense mechanisms
- Collaborative: You work effectively with legal, governance, and technical experts
- Disciplined remote worker: You manage time effectively, maintain momentum on long-term research, and show up consistently
What Makes This Different
Not your typical security role:
- You're building adaptive security infrastructure, not just finding vulnerabilities
- You work at the intersection of AI security, governance, and compliance
- You're designing living security frameworks that update as threats emerge
- You're shaping standards in an emerging field with limited precedent
High autonomy, flexible structure:
- Remote-first, manage your own schedule
- Weekly team meetings (Wednesdays 5-6 pm PT)
- Async collaboration via Slack and shared tools
- 5-10 hours/week commitment (scales up during peak periods)
Research-first culture:
- Time budgeted for learning and experimentation
- Expected to share discoveries and insights with the team
- Contribute to thought leadership and industry positioning
Commitment & Compensation
Time: 5-10 hours/week + 1 hour weekly meeting
Structure: Part-time, flexible, remote
Compensation: 0.5-2% equity (4-year vest, 1-year cliff)
Stage: Pre-seed, no current funding
This role is for someone who:
- Values equity ownership in defining AI security standards
- Wants a ground-floor opportunity in AI governance
- Sees AI security expertise as a high-value emerging specialty
- Thrives in ambiguity and early-stage environments
- Treats equity as motivation to build something meaningful
Success in This Role
First 30 days: Audit existing frameworks through a security lens, identify vulnerabilities, and propose a research roadmap
First 90 days: Deliver AI security assessment methodology, design threat monitoring strategy, and begin building security tools with the technical team
Ongoing: Keep frameworks secure as threats evolve, contribute thought leadership, and advance automated security testing
Why This Matters
AI governance without robust security is performative compliance. Organizations need frameworks that don't just check boxes but genuinely reduce risk.
As AI threats evolve (and they will), enterprises need systems that automatically detect, assess, and respond to new vulnerabilities. Your work ensures that happens.
You'll help define what "auditable AI security" means in practice.
How to Apply
Send to founder@ciph-lab.com:
- Resume/CV
- Brief note (200-300 words) on your interest in AI security governance and what you'd bring to this role
We review applications on a rolling basis.
Ciph Lab is an equal opportunity employer. We value diverse perspectives and multidimensional talent.