Our Methodology
Quality in AI training data does not happen by accident. It is the result of deliberate design at every stage: who is allowed in, how they are trained, how work is structured, and how every submission is reviewed.
This page explains exactly how Pasiflora AI approaches each of those stages. No marketing language, just the actual process.
Expert Vetting
Every expert is reviewed by a human, not filtered by an algorithm.
- Applicants submit credentials including degree, field of study, institution, and years of experience
- LinkedIn profile is collected and reviewed as part of the application
- Applications are evaluated by our team within 48 hours
- Approval is based on domain expertise, credential quality, and fit with current task needs
- Experts are matched only to tasks within their declared specialty areas
Bloom Training Program
Before touching a single task, every expert completes a structured onboarding program.
- Platform orientation covering how tasks are structured, claimed, and submitted
- Quality standards module covering what accuracy and consistency mean in AI training data
- Task-type training for annotation, evaluation, generation, validation, and comparison tasks
- Policies covering confidentiality, original work requirements, and deadline commitments
- A no-AI-generation policy is covered explicitly: submissions must reflect the expert's own judgment
Structured Task Design
Every task is built to remove ambiguity before an expert starts.
- Each task includes a written brief defining the objective, format, and scope
- Worked examples are provided so experts can calibrate before starting
- Edge case guidance covers how to handle unusual or ambiguous inputs
- A scoring rubric defines exactly how the submission will be evaluated
- Experts can ask clarifying questions through a per-task message thread before submitting
Review Workflow
Every submission goes through review before it is accepted.
- 100% of task submissions enter a review queue; nothing is auto-approved
- Reviewers evaluate submissions against the task rubric and provided examples
- Submissions that do not meet the standard are returned with specific feedback
- Experts can revise and resubmit within the task's revision window
- A quality score tracks each expert's accuracy and consistency over time
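The review loop above amounts to a small state machine. As a purely illustrative sketch (the state and event names are hypothetical, not Pasiflora AI's internal system), it could look like this:

```python
# Hypothetical states and events illustrating the review loop described
# above; not the platform's actual implementation.
TRANSITIONS = {
    ("submitted", "enter_queue"): "in_review",
    ("in_review", "approve"): "accepted",
    ("in_review", "return_with_feedback"): "returned",
    ("returned", "resubmit"): "in_review",  # within the task's revision window
}

def next_state(state: str, event: str) -> str:
    """Advance a submission through review; invalid moves raise."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} from {state!r}")
```

Note that there is deliberately no path from "submitted" straight to "accepted": every submission passes through review first.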
Domain Specialization
Work is only assigned to experts with matching credentials.
- Experts declare their specialty areas during the application process
- Tasks are surfaced only to experts whose declared expertise matches the domain
- Specialties span 431+ fields including medicine, law, physics, computer science, finance, and more
- Clients specify the credential level required for their tasks (PhD, MD, JD, CFA, etc.)
- Domain matching is not keyword-based; it is tied to verified credentials from the application
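The matching rule is simple to state: a task reaches an expert only when the verified specialty and the required credential both line up. A minimal sketch, using hypothetical field names rather than the platform's actual schema:

```python
from dataclasses import dataclass

# Illustrative data model only; field names are assumptions.
@dataclass(frozen=True)
class Expert:
    name: str
    specialties: frozenset   # verified during application review
    credential: str          # e.g. "PhD", "MD", "JD", "CFA"

@dataclass(frozen=True)
class Task:
    domain: str
    required_credential: str

def eligible(task: Task, experts: list) -> list:
    """Surface a task only to experts whose verified specialties
    include the task's domain at the required credential level."""
    return [
        e for e in experts
        if task.domain in e.specialties
        and e.credential == task.required_credential
    ]
```

Because eligibility is computed from the verified application record, an expert cannot opt into a domain by self-tagging a task feed.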
Expert Accountability
Quality standards are enforced through a transparent scoring system.
- Each expert maintains a quality score reflecting their submission accuracy and consistency
- Missed deadlines and revision-heavy submissions are reflected in the score
- Experts with declining scores receive reduced task access until the score recovers
- Experts can pause their account when unavailable, without losing their standing or history
- Repeated policy violations result in removal from the platform
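One simple way to track accuracy and consistency over time is an exponential moving average over submission outcomes. The formula and thresholds below are illustrative assumptions, not Pasiflora AI's actual scoring model:

```python
def update_score(score: float, accepted: bool, on_time: bool,
                 alpha: float = 0.2) -> float:
    """Blend the latest outcome into the running quality score.
    Accepted, on-time work pulls the score toward 1.0; returned
    work or a missed deadline pulls it toward 0.0.
    alpha is a hypothetical smoothing factor."""
    outcome = 1.0 if (accepted and on_time) else 0.0
    return (1 - alpha) * score + alpha * outcome

def task_access(score: float, threshold: float = 0.7) -> str:
    # Below the threshold, an expert sees fewer tasks until the score recovers.
    return "full" if score >= threshold else "reduced"
```

A moving average of this shape weights recent work most heavily, so a strong run of accepted submissions restores access rather than leaving one bad week on the record forever.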
Task Types
Tasks on Pasiflora AI fall into five categories. Every task brief specifies which type it is, what format to use, and exactly how the output will be evaluated.
- Annotation: labeling clinical notes, legal documents, financial data, or research text with structured tags
- Evaluation: rating AI-generated responses for accuracy, completeness, reasoning quality, and safety
- Generation: writing expert-level explanations, summaries, Q&A pairs, or case analyses
- Validation: reviewing datasets for errors that only a subject-matter expert would recognize
- Comparison: judging two or more AI-generated responses against each other
What We Don't Claim
We launched in April 2026. We are building carefully, not quickly. A few things worth saying plainly:
Questions about our process?
We are happy to walk through how any part of this works in more detail.