How We Work

Our Methodology

Quality in AI training data does not happen by accident. It is the result of deliberate design at every stage: who is allowed in, how they are trained, how work is structured, and how every submission is reviewed.

This page explains exactly how Pasiflora AI approaches each of those stages. No marketing language, just the actual process.

01

Expert Vetting

Every expert is reviewed by a human, not filtered by an algorithm.

  • Applicants submit credentials including degree, field of study, institution, and years of experience
  • LinkedIn profile is collected and reviewed as part of the application
  • Applications are evaluated by our team within 48 hours
  • Approval is based on domain expertise, credential quality, and fit with current task needs
  • Experts are matched only to tasks within their declared specialty areas

02

Bloom Training Program

Before touching a single task, every expert completes a structured onboarding program.

  • Platform orientation covering how tasks are structured, claimed, and submitted
  • Quality standards module covering what accuracy and consistency mean in AI training data
  • Task-type training for annotation, evaluation, generation, validation, and comparison tasks
  • Policies covering confidentiality, original work requirements, and deadline commitments
  • A no-AI-generation policy is explicitly covered: submissions must reflect the expert's own judgment

03

Structured Task Design

Every task is built to remove ambiguity before an expert starts.

  • Each task includes a written brief defining the objective, format, and scope
  • Worked examples are provided so experts can calibrate before starting
  • Edge case guidance covers how to handle unusual or ambiguous inputs
  • A scoring rubric defines exactly how the submission will be evaluated
  • Experts can ask clarifying questions through a per-task message thread before submitting

04

Review Workflow

Every submission goes through review before it is accepted.

  • 100% of task submissions enter a review queue; nothing is auto-approved
  • Reviewers evaluate submissions against the task rubric and provided examples
  • Submissions that do not meet the standard are returned with specific feedback
  • Experts can revise and resubmit within the task's revision window
  • A quality score tracks each expert's accuracy and consistency over time

05

Domain Specialization

Work is only assigned to experts with matching credentials.

  • Experts declare their specialty areas during the application process
  • Tasks are surfaced only to experts whose declared expertise matches the domain
  • Specialties span 431+ fields including medicine, law, physics, computer science, finance, and more
  • Clients specify the credential level required for their tasks: PhD, MD, JD, CFA, etc.
  • Domain matching is not keyword-based; it is tied to verified credentials from the application

06

Expert Accountability

Quality standards are enforced through a transparent scoring system.

  • Each expert maintains a quality score reflecting their submission accuracy and consistency
  • Missed deadlines and revision-heavy submissions are reflected in the score
  • Experts with declining scores receive reduced task access until the score recovers
  • Experts can pause their account when unavailable, without losing their standing or history
  • Repeated policy violations result in removal from the platform

What Experts Do

Task Types

Tasks on Pasiflora AI fall into five categories. Every task brief specifies which type it is, what format to use, and exactly how the output will be evaluated.

Annotation

Labeling clinical notes, legal documents, financial data, or research text with structured tags

Evaluation

Rating AI-generated responses for accuracy, completeness, reasoning quality, and safety

Generation

Writing expert-level explanations, summaries, Q&A pairs, or case analyses

Validation

Reviewing datasets for errors that only a subject-matter expert would recognize

Comparison

Comparing two or more AI-generated responses and ranking them against the task rubric

Transparency

What We Don't Claim

We launched in April 2026. We are building carefully, not quickly. A few things worth saying plainly:

We are early. We do not publish expert counts or revenue figures. Our process is built to scale without compromising quality, but we will not cite numbers we cannot back up.

Quality is enforced, not guaranteed. Our review process catches most issues, but no system is perfect. When errors reach a client, we own them, investigate them, and use them to improve the process.

We are selective by design. Not every applicant is approved. Not every domain has open tasks at all times. We would rather have fewer, better-matched experts than a large pool of underutilized ones.

Questions about our process?

We are happy to walk through how any part of this works in more detail.